# Self-attention optimization

## Koala Lightning 1b
KOALA-Lightning-1B is a knowledge-distillation model built on SDXL-Lightning. It achieves efficient text-to-image generation by compressing the U-Net, at a parameter scale of 1.16B (the general teacher-student objective is sketched below).
Text-to-Image · etri-vilab · 390 downloads · 7 likes
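For context on the distillation step, here is a minimal sketch of the generic teacher-student objective such compressed U-Nets are commonly trained with: a small student network is optimized to match a large teacher's noise prediction. All names and signatures here (`teacher_unet`, `student_unet`, the argument layout) are hypothetical illustrations, not ETRI's actual training code.

```python
import torch
import torch.nn.functional as F

# Hypothetical teacher (large SDXL-style U-Net) and compressed student U-Net;
# both are assumed to map (noisy latents, timestep, text embedding) -> predicted noise.
def distill_step(teacher_unet, student_unet, latents, t, text_emb, optimizer):
    with torch.no_grad():
        target = teacher_unet(latents, t, text_emb)  # teacher's noise prediction
    pred = student_unet(latents, t, text_emb)        # student's noise prediction
    loss = F.mse_loss(pred, target)                  # student mimics the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```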
## Eris LelantaclesV2 7b
This model is a hybrid obtained by merging the 7B-parameter models Eros-7b-test and Eris-Lelanacles-7b using the SLERP method (see the sketch below).
Large Language Model · Transformers · ChaoticNeutrals · 22 downloads · 4 likes
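SLERP (spherical linear interpolation) blends two checkpoints along the arc between their weight vectors rather than along a straight line, which preserves parameter norms better than plain averaging. A minimal per-tensor sketch, assuming two state dicts with identical keys; `sd_a`, `sd_b`, and the choice t=0.5 are hypothetical, not this model's actual merge recipe.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t in [0, 1]."""
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two (normalized) weight vectors.
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))
    if omega.abs() < eps:
        out = (1 - t) * a + t * b  # nearly parallel: fall back to linear interpolation
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w_a.shape).to(w_a.dtype)

# Merging two checkpoints parameter-by-parameter (state dicts are hypothetical):
# merged = {k: slerp(sd_a[k], sd_b[k], t=0.5) for k in sd_a}
```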
## M7 7b
License: Apache-2.0
M7-7b is an experimental project that uses the mergekit tool to fuse multiple models at the 7B parameter scale, aiming to combine the strengths of different models to improve performance (a simple merge is sketched below).
Large Language Model · Transformers · liminerity · 8,909 downloads · 16 likes
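mergekit is driven by YAML recipes and supports several merge strategies. As an illustration of the simplest family, a linear (weighted-average) merge, here is a hedged Python sketch over plain state dicts. The state-dict names and weights are hypothetical and do not reflect M7-7b's actual recipe.

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of several model state dicts sharing the same keys."""
    assert len(state_dicts) == len(weights)
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        # Average each parameter tensor across all source models.
        merged[key] = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights)) / total
    return merged

# e.g. merged = linear_merge([sd_1, sd_2, sd_3], weights=[1.0, 1.0, 1.0])
```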